Configure the Helm Release for a GKE Cluster

Learn how to create a separate configuration file for a different environment.

Our application is up and running on Google Cloud, but a couple of things could still be done better. For example, we currently need the kubectl command just to open the kanban-frontend main page. A better approach would be to have a URL that anyone can use, so we can share it.

Also in this lesson, we'll replace the PostgreSQL database installed on the cluster with an external database located on a different cloud or server. By moving the database out of Kubernetes, we can get rid of the maintenance burden, especially if we have only basic or intermediate knowledge of databases.

Finally, this lesson aims to help us start using our own Helm chart and see how easy it is to reuse it in a different environment. With only a few tweaks, the same chart works almost anywhere.

Note: As usual, at the end of the lesson, there is an interactive sandbox with all the code that was discussed.

Set up load balancer

Let's start by exposing kanban-frontend to the internet. There are a couple of ways we could approach this. One way, which is probably the best but also the most work-intensive, would be to create an Ingress. Another approach would be to change the type of kanban-frontend's Service to NodePort, but that's usually not advised.

Luckily for us, we've deployed our application on Google Cloud, which supports the LoadBalancer Service type. If we replace ClusterIP with LoadBalancer in the Service definition, we'll expose our application to the internet.

Because we don't want to make any changes in the Helm chart itself, we need to create a values file that overrides this setting. So here it is, the gke.yaml file:

GKE specific configuration of the kanban-frontend Helm release
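As a sketch, gke.yaml could look like the block below. The nested key path (service.type) is an assumption based on a typical Service template; check the kanban-frontend chart's values.yaml for the actual names:

```yaml
# gke.yaml -- overrides for the GKE environment.
# Top-level key must match the child chart's name: kanban-frontend.
kanban-frontend:
  service:
    # Replace the default ClusterIP with a cloud load balancer.
    type: LoadBalancer
```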

Remember: to override a value from a child Helm chart, we need to put it under a property with the same name as that chart. In our case, it's kanban-frontend.

And that’s it! We’ll test it together with changes from the next paragraph.

Change PostgreSQL connection

The second change we want to introduce is to stop deploying a PostgreSQL database on the cluster and use an external one instead. Heroku Postgres was selected because it has a free plan, which suits our needs, and it's easier to connect to (it doesn't require additional authorization) than the SQL service provided by Google Cloud. But it doesn't matter where the database is located; we could pick other providers as well.

Similar to the previous example, the only thing we need to do is override a couple of values from the child Helm chart, this time kanban-backend, and put them into the gke.yaml file, as follows:

GKE specific configuration of the kanban-backend Helm release
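The additional section of gke.yaml might look like the sketch below. The key names (postgres.host, database.enabled, and so on) and every connection value are illustrative assumptions, not the chart's confirmed schema:

```yaml
# gke.yaml (continued) -- values for the kanban-backend child chart.
# All connection values below are placeholders; replace them with your own
# Heroku Postgres (or other external database) details.
kanban-backend:
  postgres:
    host: ec2-52-123-45-67.compute-1.amazonaws.com
    database: exampledb
    user: exampleuser
    password: examplepassword
  database:
    # Don't install the PostgreSQL subchart on the cluster.
    enabled: false
```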

Note: Please be aware that the above inputs, i.e., the PostgreSQL host, database, and user credentials, are not valid. They're just examples, and we need to replace them with our own to make it work.

Apart from injecting the new connection details into the Pod, we also prevent the in-cluster PostgreSQL database from being created by setting database.enabled to false.

Apply both changes

To verify that everything is working, run the following command:

Updating the kanban Helm release on GKE cluster
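Assuming the release is named kanban, installed in the kanban namespace, and the chart lives in a local kanban directory (all assumptions based on this lesson's setup), the upgrade could be a sketch like:

```shell
# Apply the GKE-specific values on top of the chart's defaults.
helm upgrade kanban ./kanban -n kanban -f gke.yaml
```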

After a couple of seconds, the application should be updated. We can verify this by first listing the Services that were created:

Listing the Kubernetes Services
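A sketch of the command, assuming the kanban namespace:

```shell
# List all Services in the kanban namespace.
kubectl get services -n kanban
```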

The output will be as follows:

NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kanban-backend    ClusterIP      10.10.130.7     <none>          8080/TCP         4m47s
kanban-frontend   LoadBalancer   10.10.131.197   34.116.240.87   8080:30979/TCP   4m47s

We can see that the Service for the kanban-frontend now has the type LoadBalancer and an external IP. We can get similar information by inspecting its details:

Showing details about the kanban-frontend Service
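Again assuming the kanban namespace, the command could look like:

```shell
# Show full details, including events, for the kanban-frontend Service.
kubectl describe service kanban-frontend -n kanban
```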

The output will be as follows:

Name:                     kanban-frontend
Namespace:                kanban
Labels:                   app=kanban-frontend
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=kanban-frontend
                          app.kubernetes.io/version=kanban-2
                          group=frontend
Annotations:              cloud.google.com/neg: {"ingress":true}
                          meta.helm.sh/release-name: kanban
                          meta.helm.sh/release-namespace: kanban
Selector:                 app=kanban-frontend
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.10.131.197
IPs:                      10.10.131.197
LoadBalancer Ingress:     34.116.240.87
Port:                     <unset>  8080/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30979/TCP
Endpoints:                10.10.1.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age                From                Message
  ----    ------                ----               ----                -------
  Normal  Type                  2m48s              service-controller  ClusterIP -> LoadBalancer
  Normal  EnsuringLoadBalancer  2m48s              service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   2m                 service-controller  Ensured load balancer
  Normal  UpdatedLoadBalancer   52s (x2 over 69s)  service-controller  Updated load balancer with new hosts

From both outputs, we can see that the IP address of the frontend application is 34.116.240.87. In your case, it will be different. In the above case, when we provide http://34.116.240.87:8080 in a browser, we'll be able to see Kanban's main page (right now the application is shut down, so we won't be able to reach it).
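While the application is running, we could also probe the endpoint from a terminal. The IP below is the one from this example; yours will differ:

```shell
# Fetch only the HTTP response headers from the exposed frontend.
curl -I http://34.116.240.87:8080
```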

Apart from checking this in a terminal, we can do the same in the Google Cloud console in a browser. Here is a list view of all the Services created for our cluster:

Kubernetes Services in kanban-cluster

And here is a detailed view of the kanban-frontend Service, where we can see all the information:

The kanban-frontend Service details

As we can see, the Google Cloud Platform (GCP) Console also gives us a lot of useful information.

To help us remember all the handy commands for GKE, here are some hints:

  • Log in to gcloud using gcloud auth login
  • Create a cluster using gcloud container clusters create-auto kanban-cluster --region=<YOUR-REGION> --project=<YOUR-PROJECT-ID>
  • Update the kubectl config using gcloud container clusters get-credentials kanban-cluster --region=<YOUR-REGION> --project=<YOUR-PROJECT-ID>
  • Delete the cluster using gcloud container clusters delete kanban-cluster --region=<YOUR-REGION> --project=<YOUR-PROJECT-ID>